Efficient Constellation-Based Map-Merging for Semantic SLAM
Data association in SLAM is fundamentally challenging, and handling ambiguity
well is crucial to achieve robust operation in real-world environments. When
ambiguous measurements arise, conservatism often mandates that the measurement
is discarded or a new landmark is initialized rather than risking an incorrect
association. To address the inevitable `duplicate' landmarks that arise, we
present an efficient map-merging framework to detect duplicate constellations
of landmarks, providing a high-confidence loop-closure mechanism well-suited
for object-level SLAM. This approach uses an incrementally-computable
approximation of landmark uncertainty that only depends on local information in
the SLAM graph, avoiding expensive recovery of the full system covariance
matrix. This enables a search based on geometric consistency (GC) (rather than
full joint compatibility (JC)) that inexpensively reduces the search space to a
handful of `best' hypotheses. Furthermore, we reformulate the commonly-used
interpretation tree to allow for more efficient integration of clique-based
pairwise compatibility, accelerating the branch-and-bound max-cardinality
search. Our method is demonstrated to match the performance of full JC methods
at significantly-reduced computational cost, facilitating robust object-based
loop-closure over large SLAM problems.
Comment: Accepted to IEEE International Conference on Robotics and Automation (ICRA) 201
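The pairwise geometric-consistency test at the heart of this search can be illustrated with a minimal sketch (hypothetical names; a brute-force stand-in for the paper's accelerated branch-and-bound search): two hypothesized landmark matches are mutually consistent if they preserve the inter-landmark distance, and the accepted hypothesis is the largest mutually consistent set of matches.

```python
import itertools
import numpy as np

def pairwise_consistent(p_a, q_a, p_b, q_b, tol=0.1):
    """Two hypothesized matches (p_a -> p_b, q_a -> q_b) are geometrically
    consistent if the inter-landmark distance is preserved across maps."""
    return abs(np.linalg.norm(p_a - q_a) - np.linalg.norm(p_b - q_b)) < tol

def max_consistent_set(map_a, map_b, matches, tol=0.1):
    """Max-cardinality subset of matches in which all pairs are mutually
    consistent. Exhaustive search here; the paper accelerates this with a
    reformulated interpretation tree and branch-and-bound."""
    for r in range(len(matches), 0, -1):
        for subset in itertools.combinations(matches, r):
            if all(pairwise_consistent(map_a[i], map_a[j], map_b[k], map_b[l], tol)
                   for (i, k), (j, l) in itertools.combinations(subset, 2)):
                return list(subset)
    return []

# Example: map B is map A translated by (5, 5), plus one spurious landmark.
map_a = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
map_b = [p + 5.0 for p in map_a] + [np.array([10.0, 10.0])]
hypotheses = [(0, 0), (1, 1), (2, 3)]   # (2, 3) is the bad association
accepted = max_consistent_set(map_a, map_b, hypotheses)  # keeps (0,0), (1,1)
```

Because the test uses only relative distances, it needs no global covariance recovery, which is the property the paper exploits.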
Complexity Analysis and Efficient Measurement Selection Primitives for High-Rate Graph SLAM
Sparsity has been widely recognized as crucial for efficient optimization in
graph-based SLAM. Because the sparsity and structure of the SLAM graph reflect
the set of incorporated measurements, many methods for sparsification have been
proposed in hopes of reducing computation. These methods often focus narrowly
on reducing edge count without regard for structure at a global level. Such
structurally-naive techniques can fail to produce significant computational
savings, even after aggressive pruning. In contrast, simple heuristics such as
measurement decimation and keyframing are known empirically to produce
significant computation reductions. To demonstrate why, we propose a
quantitative metric called elimination complexity (EC) that bridges the
existing analytic gap between graph structure and computation. EC quantifies
the complexity of the primary computational bottleneck: the factorization step
of a Gauss-Newton iteration. Using this metric, we show rigorously that
decimation and keyframing impose favorable global structures and therefore
achieve computation reductions polynomial in the pruning rate r. We
additionally present numerical results
showing EC provides a good approximation of computation in both batch and
incremental (iSAM2) optimization and demonstrate that pruning methods promoting
globally-efficient structure outperform those that do not.
Comment: Pre-print accepted to ICRA 201
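The link between graph structure and factorization cost that EC formalizes can be made concrete with the classical "elimination game" (a generic sketch under assumed cost conventions, not the paper's exact metric): eliminating a variable couples its remaining neighbors (fill-in), and the work per elimination grows with the size of that neighborhood, so globally sparse structure directly reduces computation.

```python
from collections import defaultdict

def elimination_cost(edges, order):
    """Play the elimination game on an undirected graph: eliminating a
    vertex connects its remaining neighbors (fill-in) and, as a crude
    proxy for dense factorization of the frontal block, costs
    (deg + 1)**3. A stand-in for EC, not the paper's definition."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    eliminated, cost = set(), 0
    for v in order:
        nbrs = adj[v] - eliminated
        cost += (len(nbrs) + 1) ** 3
        for a in nbrs:              # fill-in: clique among remaining neighbors
            for b in nbrs:
                if a != b:
                    adj[a].add(b)
        eliminated.add(v)
    return cost

chain = [(i, i + 1) for i in range(3)]                        # sparse: 0-1-2-3
dense = [(i, j) for i in range(4) for j in range(i + 1, 4)]   # fully connected
sparse_cost = elimination_cost(chain, [0, 1, 2, 3])
dense_cost = elimination_cost(dense, [0, 1, 2, 3])
```

Even at equal vertex count, the chain factorizes far more cheaply than the fully connected graph, which is why edge-count-only pruning metrics can mislead.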
Collision Probabilities for Continuous-Time Systems Without Sampling [with Appendices]
Demand for high-performance, robust, and safe autonomous systems has grown
substantially in recent years. Fulfillment of these objectives requires
accurate and efficient risk estimation that can be embedded in core
decision-making tasks such as motion planning. On one hand, Monte-Carlo (MC)
and other sampling-based techniques can provide accurate solutions for a wide
variety of motion models but are cumbersome to apply in the context of
continuous optimization. On the other hand, "direct" approximations aim to
compute (or upper-bound) the failure probability as a smooth function of the
decision variables, and thus are widely applicable. However, existing
approaches fundamentally assume discrete-time dynamics and can perform
unpredictably when applied to continuous-time systems operating in the real
world, often manifesting as severe conservatism. State-of-the-art attempts to
address this within a conventional discrete-time framework require additional
Gaussianity approximations that ultimately produce inconsistency of their own.
In this paper we take a fundamentally different approach, deriving a risk
approximation framework directly in continuous time and producing a lightweight
estimate that actually improves as the discretization is refined. Our
approximation is shown to significantly outperform state-of-the-art techniques
in replicating the MC estimate while maintaining the functional and
computational benefits of a direct method. This enables robust, risk-aware,
continuous motion-planning for a broad class of nonlinear, partially-observable
systems.
Comment: To appear at RSS 202
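The gap between discrete-time risk checking and true continuous-time risk is easy to reproduce with a toy Monte-Carlo experiment (a hypothetical 1-D Brownian-motion barrier problem, not the paper's models): checking a constraint only at sample times misses between-sample crossings, so a coarsely discretized estimate under-reports the continuous-time failure probability and only approaches it as the grid is refined.

```python
import numpy as np

def mc_failure_prob(n_paths, n_steps, T=1.0, barrier=1.0, sigma=1.0, seed=0):
    """Monte-Carlo estimate of P(max_{t <= T} x_t >= barrier) for Brownian
    motion, with the barrier checked only at n_steps discrete sample times.
    Coarser grids miss between-sample crossings and under-report risk."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    steps = rng.normal(0.0, sigma * np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(steps, axis=1)          # sampled trajectories
    return float(np.mean(paths.max(axis=1) >= barrier))

coarse = mc_failure_prob(n_paths=20000, n_steps=5)    # crude time grid
fine = mc_failure_prob(n_paths=20000, n_steps=500)    # refined time grid
```

The coarse estimate is systematically below the fine one, illustrating why a risk approximation derived directly in continuous time, rather than from a discrete-time simplification, behaves consistently under refinement.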
Sparsity and computation reduction for high-rate visual-inertial odometry
Thesis: S.M., Massachusetts Institute of Technology, Department of Aeronautics and Astronautics, 2017. Cataloged from PDF version of thesis. Includes bibliographical references (pages 147-151).
The navigation problem for mobile robots operating in unknown environments can be posed as a subset of Simultaneous Localization and Mapping (SLAM). For computationally-constrained systems, maintaining and promoting system sparsity is key to achieving the high-rate solutions required for agile trajectory tracking. This thesis focuses on the computation involved in the elimination step of optimization, showing it to be a function of the corresponding graph structure. This observation directly motivates the search for measurement selection techniques that promote sparse structure and reduce computation. While many sophisticated selection techniques exist in the literature, relatively little attention has been paid to the simple yet ubiquitous heuristic of decimation. This thesis shows that decimation produces graphs with an inherently sparse, partitioned super-structure. Furthermore, it is shown analytically for single-landmark graphs that the even spacing of observations characteristic of decimation is near-optimal in a weighted number-of-spanning-trees sense. Recent results in the SLAM community suggest that maximizing this connectivity metric corresponds to good information-theoretic performance. Simulation results confirm that decimation-style strategies perform as well as or better than sophisticated policies that require significant computation to execute. Given that decimation consumes negligible computation to evaluate, its performance demonstrated here makes it a formidable measurement selection strategy for high-rate, real-time SLAM solutions.
Finally, the SAMWISE visual-inertial estimator is described, and thorough experimental results demonstrate its robustness in a variety of scenarios, particularly to the challenges posed by the DARPA Fast Lightweight Autonomy program.
This thesis was supported by the Defense Advanced Research Projects Agency (DARPA) under the Fast Lightweight Autonomy program.
by Kristoffer M. Frey. S.M.
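The number-of-spanning-trees argument for even spacing can be checked directly with Kirchhoff's matrix-tree theorem (an unweighted toy instance with assumed graph sizes, not the thesis's weighted derivation): in a single-landmark pose graph, evenly spaced observations yield more spanning trees than clustered ones.

```python
import numpy as np

def spanning_tree_count(n_nodes, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees of a
    graph equals any cofactor of its Laplacian (here: delete row/col 0)."""
    L = np.zeros((n_nodes, n_nodes))
    for u, v in edges:
        L[u, u] += 1
        L[v, v] += 1
        L[u, v] -= 1
        L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))

# Toy single-landmark problem: a 7-pose odometry chain (nodes 0-6) plus a
# landmark (node 7) observed from three of the poses.
chain = [(i, i + 1) for i in range(6)]
even = spanning_tree_count(8, chain + [(7, 0), (7, 3), (7, 6)])        # spread out
clustered = spanning_tree_count(8, chain + [(7, 0), (7, 1), (7, 2)])   # bunched up
```

The evenly spaced observations (24 spanning trees) beat the clustered ones (8), matching the intuition that decimation's even spacing maximizes graph connectivity.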
Belief-Space Planning for Real-World Systems: Efficient SLAM-Based Belief Propagation and Continuous-Time Safety
Uncertainty-aware planning has long been a recurring goal in robotics. By enabling autonomous systems to explicitly reason about their own uncertainty, desirable behaviors that increase observability and ensure robust constraint satisfaction arise naturally from high-level optimization specifications. For partially-observable and under-sensed systems in particular, belief-space planning (BSP) provides a natural probabilistic formulation. Despite significant research attention over the years, a few key challenges have prevented the application of BSP to the real-world systems that would stand to benefit the most, such as SLAM-reliant Micro-Aerial Vehicles (MAVs).
The most fundamental of these challenges is that of efficiently propagating the state belief, particularly under SLAM-based estimation schemes like Visual-Inertial Odometry (VIO). This thesis describes a structureless and consistent approximation for
belief propagation under SLAM, the efficacy of which is demonstrated in the challenging setting of observability-aware planning for VIO.
A key attraction of BSP is the ability to specify constraints on the total probability of failure; however, actually encoding these constraints within practical optimization schemes remains a challenge, particularly for physical systems, which evolve continuously in time. General-purpose Monte-Carlo methods can be used to accurately assess failure rates, but these are cumbersome to optimize against, while more convenient “direct” estimates are based on discrete-time simplifications and fail to meaningfully constrain the full continuous-time risk. To address this gap, a novel risk estimate is derived directly in continuous time, providing a principled, lightweight, and convenient means of ensuring probabilistic safety for real-world systems. Together, these contributions enable online, risk-constrained BSP for a large class of systems of widespread practical interest.
Ph.D
Robust Object-based SLAM for High-speed Autonomous Navigation
We present Robust Object-based SLAM for High-speed Autonomous Navigation (ROSHAN), a novel approach to object-level mapping suitable for autonomous navigation. In ROSHAN, we represent objects as ellipsoids and infer their parameters using three sources of information - bounding box detections, image texture, and semantic knowledge - to overcome the observability problem in ellipsoid-based SLAM under common forward-translating vehicle motions. Each bounding box provides four planar constraints on an object surface, and we add a fifth planar constraint using the texture on the objects along with a semantic prior on the shape of ellipsoids. We demonstrate ROSHAN in simulation, where we outperform the baseline, reducing the median shape error by 83% and the median position error by 72% in a forward-moving camera sequence. We demonstrate similar qualitative results on data collected on a fast-moving autonomous quadrotor.
NASA (Award NNX15AQ50A)
DARPA (Contract HR0011-15-C-0110
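The four planar constraints per bounding box follow from the standard back-projection identity pi = P^T l, which maps an image line l (homogeneous 3-vector) to the 3-D plane through the camera center containing that line. A minimal sketch with hypothetical names, illustrating the construction rather than the ROSHAN implementation:

```python
import numpy as np

def bbox_planes(P, bbox):
    """Back-project the four edges of an axis-aligned bounding box
    (xmin, ymin, xmax, ymax) through a 3x4 projection matrix P into 3-D
    planes tangent to the object: an image line l maps to pi = P^T l."""
    xmin, ymin, xmax, ymax = bbox
    lines = [
        np.array([1.0, 0.0, -xmin]),   # left edge:   x = xmin
        np.array([1.0, 0.0, -xmax]),   # right edge:  x = xmax
        np.array([0.0, 1.0, -ymin]),   # top edge:    y = ymin
        np.array([0.0, 1.0, -ymax]),   # bottom edge: y = ymax
    ]
    return [P.T @ l for l in lines]

# Example: camera at center C = (1, 2, 3), identity rotation and intrinsics.
C = np.array([1.0, 2.0, 3.0])
P = np.hstack([np.eye(3), -C.reshape(3, 1)])
planes = bbox_planes(P, (100.0, 120.0, 300.0, 400.0))
```

Each back-projected plane necessarily contains the camera center (since P C = 0), which is why a single view alone cannot fix the ellipsoid and motivates the texture-based fifth constraint and semantic shape prior.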